Video abnormal behavior detection based on dual prediction model of appearance and motion features
LI Ziqiang, WANG Zhengyong, CHEN Honggang, LI Linyi, HE Xiaohai
Journal of Computer Applications    2021, 41 (10): 2997-3003.   DOI: 10.11772/j.issn.1001-9081.2020121906
In order to make full use of appearance and motion information in video abnormal behavior detection, a Siamese network model that captures appearance and motion information at the same time was proposed. The two branches of the network were composed of the same autoencoder structure. Several consecutive frames of RGB images were used as the input of the appearance sub-network to predict the next frame, while RGB frame difference images were used as the input of the motion sub-network to predict the future frame difference. In addition, two factors limit the detection effect of prediction-based methods: the diversity of normal samples, and the strong "generation" ability of the autoencoder network, which gives it a good prediction effect even on some abnormal samples. Therefore, a memory enhancement module that learns and stores the "prototype" features of normal samples was added between the encoder and the decoder, so that abnormal samples obtain larger prediction errors. Extensive experiments were conducted on three public anomaly detection datasets: Avenue, UCSD-ped2 and ShanghaiTech. Experimental results show that, compared with other video abnormal behavior detection methods based on reconstruction or prediction, the proposed method achieves better performance; specifically, its average Area Under Curve (AUC) on the Avenue, UCSD-ped2 and ShanghaiTech datasets reaches 88.2%, 97.5% and 73.0% respectively.
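Below is a minimal numpy sketch of the memory-addressing idea: an encoder feature is re-expressed as a soft combination of stored "prototype" slots before decoding, so inputs far from the normal prototypes reconstruct poorly. The shapes, softmax temperature and mean-squared anomaly score are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def memory_read(query, memory, temperature=0.1):
    """Re-express an encoder feature as a soft combination of stored prototypes.

    query:  (d,)   feature vector from the encoder
    memory: (n, d) learned "prototype" features of normal samples
    """
    # cosine similarity between the query and every memory slot
    sims = memory @ query / (np.linalg.norm(memory, axis=1)
                             * np.linalg.norm(query) + 1e-8)
    weights = np.exp(sims / temperature)
    weights /= weights.sum()          # soft addressing weights
    return weights @ memory           # feature handed to the decoder

def anomaly_score(predicted, actual):
    # larger prediction error => more likely abnormal behavior
    return np.mean((predicted - actual) ** 2)

rng = np.random.default_rng(0)
mem = rng.normal(size=(10, 8))        # 10 prototype slots, 8-dim features
print(memory_read(rng.normal(size=8), mem))
```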
Scheduling method for big data tasks
LI Ziying, SHI Zhenguo
Journal of Computer Applications    2020, 40 (10): 2923-2928.   DOI: 10.11772/j.issn.1001-9081.2020030348
Because the division and resource allocation of big data tasks often lack rationality in big data processing, a scheduling method for big data tasks was proposed. First, in order to establish a reasonable management system for big data tasks and standardize the big data task processing flow, scheduling theory was introduced to handle big data tasks. Then, based on the nature of big data tasks, the datasets were analyzed and handled, and a decision table was introduced to perform attribute reduction, so as to reduce the data amount of big data analysis tasks and improve big data analysis efficiency. Finally, the fuzzy comprehensive evaluation method was adopted, and the result of the fuzzy comprehensive evaluation was used as the basis for task scheduling, thereby improving the rationality of task resource allocation. Experimental results on University of California Irvine (UCI) datasets show that the average prediction accuracy of the proposed scheduling algorithm is 7.42 percentage points higher than that of the Naive Bayes (NB) algorithm, 5.16 percentage points higher than that of the error Back Propagation (BP) algorithm, and 3.74 percentage points higher than that of the Root Mean Square Prop (RMSProp) algorithm. For datasets with a large number of features, the prediction accuracy of the proposed algorithm is significantly improved compared with those of the other algorithms. Compared with the Heterogeneous Critical Path First Synthesis (HCPFS) algorithm and the Heterogeneous Improved Priority List for Task Scheduling (HIPLTS) algorithm, the proposed algorithm decreases the average Scheduling Length Ratio (SLR) by 12.14% and 4.56% respectively, and increases the average speedup ratio by 7.14% and 42.56% respectively, showing that it can effectively improve the efficiency of task scheduling in big data systems. Comprehensive analysis shows that the proposed algorithm achieves good prediction accuracy, and is efficient and reliable.
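As a toy illustration of the fuzzy comprehensive evaluation step, the sketch below scores a task against three priority grades; the attribute weights and membership matrix are made-up numbers rather than values from the paper.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])   # importance of data size, urgency, complexity

# membership of each attribute in the grades (high, medium, low priority)
R = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

evaluation = weights @ R              # weighted-average composite B = W . R
priority_grade = int(evaluation.argmax())
print(evaluation, priority_grade)     # [0.46 0.31 0.23] -> grade 0 (high priority)
```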
Hybrid recommendation algorithm based on rating filling and trust information
SHEN Xueli, LI Zijian, HE Chenhao
Journal of Computer Applications    2020, 40 (10): 2789-2794.   DOI: 10.11772/j.issn.1001-9081.2020020267
Aiming at the problem of poor recommendation effect caused by data sparsity in recommender systems, a hybrid recommendation algorithm based on rating filling and trust information, namely RTWSO (Real-value user item restricted Boltzmann machine Trust Weighted Slope One), was proposed. Firstly, an improved restricted Boltzmann machine model was used to fill the rating matrix, so as to alleviate its sparsity. Secondly, trusting and trusted relationships were extracted from the trust relationship, and a matrix decomposition based implicit trust similarity was used to solve the problem of trust relationship sparsity; the original algorithm was modified to incorporate trust information, improving recommendation accuracy. Finally, the Weighted Slope One (WSO) algorithm was used to integrate the matrix filling and trust similarity information and predict the rating data. The performance of the proposed hybrid recommendation algorithm was verified on the Epinions and Ciao datasets: its recommendation accuracy is improved by more than 3% compared with its component algorithms, and by more than 1.2% compared with the existing social recommendation algorithm SocialIT (Social recommendation algorithm based on Implicit similarity in Trust). Experimental results show that the proposed hybrid recommendation method based on rating filling and trust information improves recommendation accuracy to a certain extent.
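The Weighted Slope One component is a standard algorithm; a compact sketch follows (the paper's variant additionally feeds in RBM-filled ratings and trust similarity weights, which are omitted here).

```python
def weighted_slope_one(ratings, user, target):
    """ratings: {user: {item: rating}}; predict `user`'s rating for `target`."""
    num = den = 0.0
    for other, r in ratings[user].items():
        if other == target:
            continue
        # deviations target - other over users who rated both items
        diffs = [urs[target] - urs[other] for urs in ratings.values()
                 if target in urs and other in urs]
        if diffs:
            dev = sum(diffs) / len(diffs)
            num += (r + dev) * len(diffs)   # weight by number of co-raters
            den += len(diffs)
    return num / den if den else None

ratings = {"u1": {"a": 5, "b": 3, "c": 2},
           "u2": {"a": 3, "b": 4},
           "u3": {"b": 2, "c": 5}}
print(weighted_slope_one(ratings, "u2", "c"))   # ~3.33
```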
Task scheduling strategy based on data stream classification in Heron
ZHANG Yitian, YU Jiong, LU Liang, LI Ziyang
Journal of Computer Applications    2019, 39 (4): 1106-1116.   DOI: 10.11772/j.issn.1001-9081.2018081848
In Heron, a new platform for big data stream processing, the round-robin algorithm is usually used for task scheduling by default; it considers neither the runtime state of the topology nor the impact of different communication modes among task instances on Heron's performance. To solve this problem, a task scheduling strategy based on Data Stream Classification in Heron (DSC-Heron) was proposed, including a data stream classification algorithm, a data stream cluster allocation algorithm and a data stream classification scheduling algorithm. Firstly, the instance allocation model of Heron was established to clarify the differences in communication overhead among the different communication modes of task instances. Secondly, data streams were classified according to the real-time data stream size between task instances, based on the data stream classification model of Heron. Finally, the packing plan of Heron was constructed by using the interrelated high-frequency data streams as the basic scheduling unit, transforming inter-node data streams into intra-node ones as far as possible to minimize the communication cost. After running the SentenceWordCount, WordCount and FileWordCount topologies in a Heron cluster environment with 9 nodes, the results show that compared with the Heron default scheduling strategy, DSC-Heron achieves 8.35%, 7.07% and 6.83% improvements in system complete latency, inter-node communication overhead and system throughput respectively; in terms of load balancing, the standard deviations of CPU usage and memory usage of the working nodes are decreased by 41.44% and 41.23% respectively. All experimental results show that DSC-Heron can effectively improve the performance of the topologies, with the most significant optimization effect on the FileWordCount topology, which is close to a real application scenario.
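A greedy sketch of the core packing idea, under assumed slot counts and measured stream rates: the heaviest data streams are handled first so that their endpoint instances are co-located and the traffic stays intra-node; the first-fit fallback is an illustrative simplification.

```python
from collections import defaultdict

def build_packing_plan(stream_rates, slots_per_node):
    """stream_rates: {(src_instance, dst_instance): measured tuples/sec}."""
    plan, load = {}, defaultdict(int)

    def place(inst, preferred=None):
        if inst in plan:
            return plan[inst]
        for node in [preferred] + list(range(len(load) + 1)):
            if node is not None and load[node] < slots_per_node:
                plan[inst], load[node] = node, load[node] + 1
                return node

    # heaviest streams first: their endpoints get co-located while slots last
    for (src, dst), _rate in sorted(stream_rates.items(), key=lambda kv: -kv[1]):
        node = place(src)
        place(dst, preferred=node)    # try to keep this stream intra-node
    return plan

print(build_packing_plan({("spout-1", "bolt-1"): 900,
                          ("bolt-1", "bolt-2"): 700,
                          ("spout-1", "bolt-3"): 40}, slots_per_node=2))
# {'spout-1': 0, 'bolt-1': 0, 'bolt-2': 1, 'bolt-3': 1}
```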
Dynamic task dispatching strategy for stream processing based on flow network
LI Ziyang, YU Jiong, BIAN Chen, LU Liang, PU Yonglin
Journal of Computer Applications    2018, 38 (9): 2560-2567.   DOI: 10.11772/j.issn.1001-9081.2017122910
Concerning the problem that a sharp increase of the data input rate raises computing latency and thus harms the real-time performance of computing in big data stream processing platforms, a dynamic dispatching strategy based on flow networks was proposed and applied to the data stream processing platform Apache Flink. Firstly, a Directed Acyclic Graph (DAG) was transformed into a flow network by defining the capacity and flow of every edge, and a capacity detection algorithm was used to ascertain the capacity value of every edge. Secondly, a maximum flow algorithm was used to acquire the improved network and the optimization path in order to promote the throughput of the cluster when the data input rate increases; meanwhile, the feasibility of the algorithm was proved by evaluating its time-space complexity. Finally, the influence of an important parameter on the algorithm execution was discussed, and recommended parameter values for different types of jobs were obtained by experiments. The experimental results show that, compared with the original dispatching strategy of Apache Flink, the strategy promotes throughput by more than 16.12% during the increasing phases of the data input rate in different types of benchmarks, so the dynamic dispatching strategy efficiently promotes the throughput of the cluster under the premise of the task latency constraint.
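The maximum flow computation itself is standard; below is an Edmonds-Karp sketch over a toy operator DAG, where the hard-coded edge capacities stand in for the values the paper's capacity detection algorithm would measure.

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: {u: {v: capacity}}; returns the max s-t flow (Edmonds-Karp)."""
    # build a residual graph with zero-capacity reverse edges
    res = {}
    for u, vs in cap.items():
        for v, c in vs.items():
            res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0) + c
            res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for a shortest augmenting path
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # bottleneck along the path, then push flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        total += bottleneck

# toy operator DAG: a source feeding two parallel map branches
caps = {"src": {"map1": 5, "map2": 3}, "map1": {"out": 4}, "map2": {"out": 4}}
print(max_flow(caps, "src", "out"))   # 7: bottleneck edges cap the throughput
```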
New post quantum authenticated key exchange protocol based on ring learning with errors problem
LI Zichen, XIE Ting, CAI Juliang, ZHANG Xiaowei
Journal of Computer Applications    2018, 38 (8): 2243-2248.   DOI: 10.11772/j.issn.1001-9081.2018020387
In view of the fact that the rapid development of quantum computer technology poses a serious threat to the security of traditional public-key cryptosystems, a new authenticated key exchange protocol based on the Ring Learning With Errors (RLWE) problem was proposed. By using the Peikert error reconciliation mechanism, both parties of the communication can directly obtain uniformly distributed shared bit values and derive the same session key. The encoding bases of the lattice were used to analyze the error tolerance, and reasonable parameters were selected to ensure that both parties obtain the same session key with overwhelming probability. The security of the protocol was proved in the BR (Bellare-Rogaway) model with weak perfect forward secrecy. The security of the protocol reduces to the hard RLWE problem on lattices, so the protocol can resist quantum attacks. Compared with existing authenticated key exchange protocols based on RLWE, the modulus parameter decreases from sub-exponential to polynomial magnitude, so the corresponding amounts of computation and communication are also significantly reduced. The results show that the proposed scheme is a more concise and efficient post quantum authenticated key exchange protocol.
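To make the reconciliation step concrete, here is a single-coefficient toy of Peikert's mechanism, assuming a modulus divisible by 8; the real protocol operates on ring elements and uses a randomized doubling step to remove bias, both omitted here.

```python
import random

Q = 2 ** 14   # toy even modulus; real schemes use concrete RLWE parameters

def round2(v):
    # |v>_2 : round(2v/Q) mod 2, the shared key bit
    return ((2 * v + Q // 2) // Q) % 2

def cross_round(v):
    # <v>_2 : floor(4v/Q) mod 2, the public reconciliation hint
    return (4 * v // Q) % 2

def rec(w, b):
    # output 0 iff w lies in I_b + E (mod Q), where I_0 = [0, Q/4),
    # I_1 = [-Q/4, 0) and E = [-Q/8, Q/8)
    start = 0 if b == 0 else 3 * Q // 4
    return 0 if (w - start + Q // 8) % Q < Q // 2 else 1

# both parties hold close values v ~ w (|v - w| < Q/8 after the RLWE exchange);
# the hint lets them agree on the same bit without revealing it
for _ in range(1000):
    v = random.randrange(Q)
    w = (v + random.randrange(-Q // 8 + 1, Q // 8)) % Q
    assert rec(w, cross_round(v)) == round2(v)
```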
Task scheduling strategy based on topology structure in Storm
LIU Su, YU Jiong, LU Liang, LI Ziyang
Journal of Computer Applications    2018, 38 (12): 3481-3489.   DOI: 10.11772/j.issn.1001-9081.2018040741
In order to solve the problems of large communication cost and unbalanced load in the default round-robin scheduling strategy of the Storm stream computing platform, a Task Scheduling Strategy based on Topology Structure in Storm (TS2) was proposed. Firstly, work nodes with sufficient available Central Processing Unit (CPU) resources were selected and only one process was allocated to each work node, eliminating the communication cost between processes within the nodes and optimizing process deployment. Then, the topology structure was analyzed, the component with the biggest degree in the topology was found, and the threads of that component were assigned the highest priority. Finally, subject to the maximum number of threads a node could carry, associated tasks were deployed to the same node as far as possible to reduce the communication cost between nodes, improve the load balance of the cluster and optimize thread deployment. The experimental results show that, in terms of system latency, the average optimization rate of TS2 is 16.91% and 5.69% compared with the Storm default scheduling strategy and the offline scheduling strategy respectively, which effectively improves the real-time performance of the system; additionally, compared with the Storm default scheduling strategy, TS2 reduces the communication cost between nodes by 15.75% and improves the average throughput by 14.21%.
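A small sketch of the priority step: rank components by their degree in the topology graph so that threads of the most connected component are placed first. The example topology is illustrative, not taken from the paper.

```python
from collections import defaultdict

edges = [("spout", "split"), ("split", "count"), ("split", "filter"),
         ("count", "report")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# components in scheduling-priority order: biggest degree first
priority = sorted(degree, key=degree.get, reverse=True)
print(priority)   # ['split', 'count', 'spout', 'filter', 'report']
```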
Dynamic data stream load balancing strategy based on load awareness
LI Ziyang, YU Jiong, BIAN Chen, WANG Yuefei, LU Liang
Journal of Computer Applications    2017, 37 (10): 2760-2766.   DOI: 10.11772/j.issn.1001-9081.2017.10.2760
Concerning the problems of unbalanced load and incomplete comprehensive evaluation of nodes in big data stream processing platforms, a dynamic load balancing strategy based on a load awareness algorithm was proposed and applied to the data stream processing platform Apache Flink. Firstly, the computational delay time of the nodes was obtained by applying a depth-first search algorithm to the Directed Acyclic Graph (DAG) and was regarded as the basis for evaluating node performance, and the load balancing strategy was created. Secondly, load migration technology for data streams was implemented based on the data block management strategy, and both global and local load optimization were implemented through feedback. Finally, the feasibility of the algorithm was proved by evaluating its time-space complexity, and the influence of important parameters on the algorithm execution was discussed. The experimental results show that the proposed algorithm increases the efficiency of task execution by optimizing the load sharing between nodes, and the task execution time is shortened by 6.51% on average compared with the traditional load balancing strategy of Apache Flink.
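A minimal sketch of the first step, with made-up per-operator latencies: a memoized depth-first traversal of the DAG yields each node's accumulated computational delay, the evidence used to rate node performance.

```python
from functools import lru_cache

LATENCY = {"source": 2, "parse": 5, "join": 9, "sink": 1}   # ms, illustrative
DOWNSTREAM = {"source": ["parse"], "parse": ["join"],
              "join": ["sink"], "sink": []}

@lru_cache(maxsize=None)
def path_latency(node):
    # latency of the slowest downstream path starting at this node
    tails = [path_latency(nxt) for nxt in DOWNSTREAM[node]]
    return LATENCY[node] + (max(tails) if tails else 0)

print(path_latency("source"))   # 17 ms along source -> parse -> join -> sink
```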
Attack graph based risk assessment method for cyber security of cyber-physical system
WU Wenbo, KANG Rui, LI Zi
Journal of Computer Applications    2016, 36 (1): 203-206.   DOI: 10.11772/j.issn.1001-9081.2016.01.0203
Recent incidents such as the Stuxnet worm have shown that cyber attacks can cause serious physical damage to Cyber-Physical Systems (CPS). Aiming at this problem, a risk assessment method based on attack graphs was proposed. Firstly, the attack behavior of CPS was analyzed, and the results showed that vulnerabilities in physical devices such as Programmable Logic Controllers (PLC) are the key to cross-domain attacks; the utilization modes and impact of these vulnerabilities were then described. Secondly, the risk assessment model was proposed, together with a successful-attack-probability index and an attack-impact index. The successful-attack-probability index was calculated from the intrinsic characteristics of the vulnerabilities and the ability of the attacker, while the attack-impact index was calculated from the host importance and the utilization mode of the vulnerabilities. The method assesses the cyber layer and the physical layer as a whole system and considers the impact of multiple cross-domain attacks on system risk. The numerical examples show that the risk of a combined attack is five times the risk of a single attack, and the risk value obtained is more accurate.
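A worked toy of the two indexes: the probability that every vulnerability on a cross-domain attack path is exploited in sequence, multiplied by an impact term driven by host importance. The numbers are illustrative, not the paper's scoring.

```python
# illustrative cross-domain attack path ending at a physical device
path = [
    {"vuln": "web server RCE", "p_exploit": 0.6},
    {"vuln": "engineering workstation flaw", "p_exploit": 0.5},
    {"vuln": "PLC firmware flaw", "p_exploit": 0.4},
]
host_importance = 0.9          # the PLC drives the physical process

p_success = 1.0
for step in path:
    p_success *= step["p_exploit"]     # all steps must succeed in sequence

risk = p_success * host_importance     # successful-attack-probability x impact
print(f"p={p_success:.3f}, risk={risk:.3f}")   # p=0.120, risk=0.108
```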
Path planning algorithm based on regional-advance strategy for aircraft fuel tank inspection robot
NIU Guochen, ZHANG Weicheng, LI Ziwei
Journal of Computer Applications    2014, 34 (8): 2415-2418.   DOI: 10.11772/j.issn.1001-9081.2014.08.2415
To obtain a path for a continuum robot in an environment like an aircraft fuel tank, a path planning algorithm based on a regional-advance strategy was proposed. Combined with the mechanical constraints of the robot, the method ensures that arbitrary points in a single cabin can be reached. The hyper-redundant degrees of freedom that give the continuum robot its flexibility of movement also bring about multiple path solutions in three-dimensional space and high time complexity; therefore, an approach based on dimension reduction, which transforms planning in three-dimensional space into planning in a two-dimensional plane, was presented to reduce the computational complexity. The single cabin of the aircraft fuel tank was divided into two regions, and the planning strategy was determined by the regional location of the target point. Finally, MATLAB simulation experiments were carried out, and the practicability and effectiveness of the proposed method were verified.
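A speculative sketch of the regional-advance decision under an assumed region boundary: choose the strategy from the region containing the target, then reduce the 3D problem to the 2D vertical plane through the base and the target.

```python
import numpy as np

REGION_BOUNDARY_X = 0.5   # assumed split of the single cabin into two regions

def plan(base, target):
    region = 1 if target[0] <= REGION_BOUNDARY_X else 2
    # dimension reduction: work in the vertical plane containing base and target
    d = np.array(target[:2]) - np.array(base[:2])
    heading = np.arctan2(d[1], d[0])   # orientation of the 2D planning plane
    r = np.hypot(*d)                   # in-plane horizontal coordinate
    return region, heading, (r, target[2] - base[2])   # 2D goal in that plane

print(plan(base=(0, 0, 0), target=(0.4, 0.3, 0.2)))
```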
Lattice signature and its application based on small integer solution problem
CAO Jie, YANG Yatao, LI Zichen
Journal of Computer Applications    2014, 34 (1): 78-81.   DOI: 10.11772/j.issn.1001-9081.2014.01.0078
A lattice signature scheme based on the Small Integer Solution (SIS) problem in the random oracle model was proposed, and rules for choosing its parameters were illustrated. The key lengths generated under different parameter settings were then compared, and the security and efficiency of the signature scheme were verified. Finally, to achieve fairness and reliability in multipartite authentication, the signature scheme was combined with key distribution and escrow, and a new authentication scheme using the Singular Value Decomposition (SVD) algorithm from matrix decomposition theory was proposed.
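For reference, the SIS problem underlying such schemes is, in its generic form (the paper's concrete parameter choices are not reproduced here):

```latex
\mathrm{SIS}_{n,m,q,\beta}:\quad
\text{given uniformly random } A \in \mathbb{Z}_q^{\,n \times m},\;
\text{find } z \in \mathbb{Z}^{m} \setminus \{0\}
\text{ such that } A z \equiv 0 \pmod{q} \text{ and } \lVert z \rVert \le \beta .
```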